Results 1 - 11 of 11
1.
IEEE Open J Eng Med Biol ; 5: 54-58, 2024.
Article in English | MEDLINE | ID: mdl-38487094

ABSTRACT

Goal: Distance information is highly requested in assistive smartphone apps by people who are blind or have low vision (PBLV). However, current techniques have not been systematically evaluated for accuracy and usability. Methods: We tested five smartphone-based distance-estimation approaches in the image center and periphery at 1-3 meters: machine learning (CoreML), infrared grid distortion (IR_self), light detection and ranging (LiDAR_back), and augmented reality room-tracking on the front-facing (ARKit_self) and back-facing (ARKit_back) cameras. Results: For accuracy in the image center, all approaches had <±2.5 cm average error, except CoreML, which had ±5.2-6.2 cm average error at 2-3 meters. In the periphery, all approaches were less accurate, with CoreML and IR_self having the highest average errors at ±41 cm and ±32 cm, respectively. For usability, CoreML fared favorably, with the lowest central processing unit usage, second-lowest battery usage, highest field of view, and no specialized sensor requirements. Conclusions: We provide key information for designing reliable smartphone-based visual assistive technologies that enhance the functionality of PBLV.
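The accuracy metric reported above is an average absolute error against a known target distance. A minimal Python sketch of that computation follows; all readings below are hypothetical illustrations, not the study's data.

```python
import statistics

def mean_abs_error_cm(estimates_m, ground_truth_m):
    """Average absolute error, in centimetres, of a set of distance
    estimates (in metres) against a known ground-truth distance."""
    return statistics.mean(abs(e - ground_truth_m) for e in estimates_m) * 100

# Hypothetical readings from one method at a 2 m target (metres).
center_readings = [1.98, 2.01, 2.03, 1.99]
periphery_readings = [1.70, 2.35, 1.62, 2.28]

center_err = mean_abs_error_cm(center_readings, 2.0)
periphery_err = mean_abs_error_cm(periphery_readings, 2.0)
```

Computed per region (center vs. periphery) and per target distance, this yields the kind of comparison table the abstract summarizes.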

2.
Article in English | MEDLINE | ID: mdl-38082714

ABSTRACT

Recent object detection models show promising advances in architecture and performance, expanding potential applications for the benefit of persons with blindness or low vision (pBLV). However, object detection models are usually trained on generic data rather than datasets focused on the needs of pBLV. Hence, for applications that locate objects of interest to pBLV, object detection models need to be trained specifically for this purpose. Informed by prior interviews, questionnaires, and Microsoft's ORBIT research, we identified thirty-five objects pertinent to pBLV. We used this user-centric feedback to gather images of these objects from the Google Open Images V6 dataset and then trained a YOLOv5x model on this dataset to recognize these objects of interest. We demonstrate that the model can identify objects that previous generic models could not, such as those related to tasks of daily functioning, e.g., coffee mug, knife, fork, and glass. Crucially, we show that careful pruning of a dataset with severe class imbalance leads to a rapid, two-fold improvement in the overall performance of the model, as measured by the mean average precision at intersection-over-union thresholds from 0.5 to 0.95 (mAP50-95). Specifically, mAP50-95 improved from 0.14 to 0.36 on the seven least prevalent classes in the training dataset. Overall, we show that careful curation of training data can improve both training speed and object detection outcomes, and we give clear directions on customizing training data to create models that focus on the desires and needs of pBLV. Clinical Relevance: This work demonstrates the benefits of developing assistive AI technology customized to individual users or the wider BLV community.
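The mAP50-95 metric cited above averages detection precision over a sweep of intersection-over-union (IoU) thresholds. A minimal sketch of the two ingredients, independent of any particular detector:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def map50_95(ap_at_threshold):
    """mAP50-95: mean of average precision evaluated at IoU thresholds
    0.50, 0.55, ..., 0.95. `ap_at_threshold` is any callable returning
    the AP at a given threshold (supplied by the evaluation pipeline)."""
    thresholds = [0.50 + 0.05 * i for i in range(10)]
    return sum(ap_at_threshold(t) for t in thresholds) / len(thresholds)
```

Averaging over stricter and stricter IoU thresholds is what makes mAP50-95 sensitive to localization quality, not just classification.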


Subject(s)
Self-Help Devices , Low Vision , Visually Impaired Persons , Humans , Blindness , Head
3.
Neuroscientist ; 29(1): 117-138, 2023 02.
Article in English | MEDLINE | ID: mdl-34382456

ABSTRACT

The visual system retains profound plastic potential in adulthood. In the current review, we summarize the evidence of preserved plasticity in the adult visual system during visual perceptual learning as well as both monocular and binocular visual deprivation. In each condition, we discuss how such evidence reflects two major cellular mechanisms of plasticity: Hebbian and homeostatic processes. We focus on how these two mechanisms work together to shape plasticity in the visual system. In addition, we discuss how these two mechanisms could be further revealed in future studies investigating cross-modal plasticity in the visual system.


Subject(s)
Neuronal Plasticity , Visual Cortex , Adult , Humans , Homeostasis
4.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 5868-5871, 2021 11.
Article in English | MEDLINE | ID: mdl-34892454

ABSTRACT

Sensory substitution devices (SSDs) such as the 'vOICe' preserve visual information in sound by turning visual height, brightness, and laterality into auditory pitch, volume, and panning/time, respectively. However, users have difficulty identifying or tracking multiple simultaneously presented tones - a skill necessary for discriminating the upper and lower edges of object shapes. We explore how these deficits can be addressed using image sonifications inspired by auditory scene analysis (ASA). Here, sighted subjects (N=25) of varying musical experience listened to, and then reconstructed, complex shapes consisting of simultaneously presented upper and lower lines. Complex shapes were sonified using the vOICe, either with the upper and lower lines varying only in pitch (the vOICe's 'unaltered' default settings) or with one line degraded to alter its auditory timbre or volume. Overall performance increased with subjects' years of prior musical experience. ANOVAs revealed that both sonification style and musical experience significantly affected performance, with no interaction effect between them. Compared to the vOICe's 'unaltered' pitch-height mapping, subjects had significantly better image-reconstruction abilities when the lower line was altered via timbre or volume modulation. By contrast, altering the upper line only helped users identify the unaltered lower line. In conclusion, adding ASA principles to vision-to-audio SSDs boosts subjects' image-reconstruction abilities, even if this also reduces total task-relevant information. Future SSDs should exploit these findings to enhance both novice user abilities and the use of SSDs as visual rehabilitation tools.
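The vOICe-style mapping described above (height to pitch, brightness to volume, left-to-right position to time) can be sketched in outline. The sample rate, frequency range, and column duration below are illustrative assumptions, not the device's actual settings.

```python
import math

SAMPLE_RATE = 8000                     # assumed, for illustration
FREQ_LOW, FREQ_HIGH = 200.0, 2000.0    # assumed pitch range

def sonify_column(column, duration_s=0.05):
    """Turn one image column (brightness values in [0, 1], index 0 =
    top row) into a mono waveform: each row contributes a sine tone
    whose pitch encodes height and whose amplitude encodes brightness,
    with all rows sounding simultaneously."""
    n_rows = len(column)
    n_samples = int(SAMPLE_RATE * duration_s)
    freqs = [FREQ_HIGH - (FREQ_HIGH - FREQ_LOW) * r / max(1, n_rows - 1)
             for r in range(n_rows)]  # top row -> highest pitch
    samples = []
    for i in range(n_samples):
        t = i / SAMPLE_RATE
        s = sum(b * math.sin(2 * math.pi * f * t)
                for b, f in zip(column, freqs))
        samples.append(s / max(1, n_rows))  # keep within [-1, 1]
    return samples

def sonify_image(columns):
    """Scan columns left to right; concatenating the column waveforms
    maps the horizontal image axis onto time."""
    out = []
    for col in columns:
        out.extend(sonify_column(col))
    return out
```

The difficulty the study targets is audible here: two bright rows in one column produce two simultaneous tones differing only in pitch, which is exactly the condition the timbre- and volume-degraded variants were designed to ease.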


Subject(s)
Auditory Perception , Image Processing, Computer-Assisted , Functional Laterality , Humans , Sound , Vision, Ocular
6.
Cognition ; 175: 114-121, 2018 06.
Article in English | MEDLINE | ID: mdl-29502009

ABSTRACT

Cross-modal correspondences describe the widespread tendency for attributes in one sensory modality to be consistently matched to those in another modality. For example, high pitched sounds tend to be matched to spiky shapes, small sizes, and high elevations. However, the extent to which these correspondences depend on sensory experience (e.g. regularities in the perceived environment) remains controversial. Two recent studies involving blind participants have argued that visual experience is necessary for the emergence of correspondences, wherein such correspondences were present (although attenuated) in late blind individuals but absent in the early blind. Here, using a similar approach and a large sample of early and late blind participants (N = 59) and sighted controls (N = 63), we challenge this view. Examining five auditory-tactile correspondences, we show that only one requires visual experience to emerge (pitch-shape), two are independent of visual experience (pitch-size, pitch-weight), and two appear to emerge in response to blindness (pitch-texture, pitch-softness). These effects tended to be more pronounced in the early blind than late blind group, and the duration of vision loss among the late blind did not mediate the strength of these correspondences. Our results suggest that altered sensory input can affect cross-modal correspondences in a more complex manner than previously thought and cannot solely be explained by a reduction in visually-mediated environmental correlations. We propose roles of visual calibration, neuroplasticity and structurally-innate associations in accounting for our findings.


Subject(s)
Auditory Perception/physiology , Blindness/physiopathology , Visual Perception/physiology , Acoustic Stimulation , Adult , Female , Humans , Male , Middle Aged , Photic Stimulation , Pitch Perception/physiology , Young Adult
7.
Front Psychol ; 9: 2492, 2018.
Article in English | MEDLINE | ID: mdl-30618928

ABSTRACT

It is accepted knowledge that, for a given equivalent sound pressure level, sounds produced by planes are received less favourably by local communities than other transportation sources. Very little is known about the reasons for this special status, including how non-acoustical factors may interact in listener assessments. Here we focus on one such factor: the multisensory aspect of aircraft events. We propose a method to assess the visual impact of perceived aircraft height and size, beyond the objective increase in sound pressure level for a plane flying lower than another. We adopt a soundscape approach based on acoustical indicators (equivalent and maximum A-weighted sound pressure levels, background sound pressure level) and social surveys: a combination of postal questionnaires (related to long-term exposure) and field interviews (related to contextual perception), complementing well-established questions with others designed to capture new multisensory relationships. For the first time, we report how the perceived visual height of airplanes can be established using a combination of visual size, airplane size, reading distance, and airplane distance. Visual and acoustic assessments are complemented and contextualized by additional questions probing the subjective, objective, and descriptive assessments made by observers, as well as how changes in airplane height over time may have influenced these perceptions. The flexibility of the proposed method allows a comparison of how participant reporting can vary across live viewing and memory recall conditions, allowing an examination of listeners' acoustic memory and expectations. Combining different assessment methods allows a comparison between the "objective" and the "perceptual" spheres and helps underscore the multisensory nature of observers' perceptual and emotive evaluations. We discuss the pros and cons of our method, as assessed during a community survey conducted in the summer of 2017 around Gatwick airport, and compare the different assessments of community perception.
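The relation between visual size, airplane size, reading distance, and airplane distance described above is a similar-triangles argument. The abstract does not spell out the exact survey procedure, so the sketch below is one plausible reading, with hypothetical numbers.

```python
def distance_from_apparent_size(real_size_m, apparent_size_m, reading_distance_m):
    """Similar-triangles estimate: if an aircraft of real span
    `real_size_m` appears `apparent_size_m` wide when matched against
    a scale held at `reading_distance_m` from the eye, then
    apparent / reading = real / distance, so distance follows."""
    return real_size_m * reading_distance_m / apparent_size_m

# Hypothetical example: a 36 m wingspan appearing 18 mm wide against a
# ruler held at 50 cm implies the aircraft is about 1 km away.
estimated_distance = distance_from_apparent_size(36.0, 0.018, 0.5)
```

Given the aircraft's distance and its elevation angle, perceived height would follow from basic trigonometry; the survey's interest is in how such estimates diverge from the objective values.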

8.
Multisens Res ; 30(3-5): 337-362, 2017 Jan 01.
Article in English | MEDLINE | ID: mdl-31287083

ABSTRACT

There is a widespread tendency to associate certain properties of sound with those of colour (e.g., higher pitches with lighter colours). Yet it is an open question how sound influences chroma or hue when properly controlling for lightness. To examine this, we asked participants to adjust physically equiluminant colours until they 'went best' with certain sounds. For pure tones, complex sine waves and vocal timbres, increases in frequency were associated with increases in chroma. Increasing the loudness of pure tones also increased chroma. Hue associations varied depending on the type of stimuli. In stimuli that involved only limited bands of frequencies (pure tones, vocal timbres), frequency correlated with hue, such that low frequencies gave blue hues and progressed to yellow hues at 800 Hz. Increasing the loudness of a pure tone was also associated with a shift from blue to yellow. However, for complex sounds that share the same bandwidth of frequencies (100-3200 Hz) but that vary in terms of which frequencies have the most power, all stimuli were associated with yellow hues. This suggests that the presence of high frequencies (above 800 Hz) consistently yields yellow hues. Overall we conclude that while pitch-chroma associations appear to flexibly re-apply themselves across a variety of contexts, frequencies above 800 Hz appear to produce yellow hues irrespective of context. These findings reveal new sound-colour correspondences previously obscured through not controlling for lightness. Findings are discussed in relation to understanding the underlying rules of cross-modal correspondences, synaesthesia, and optimising the sensory substitution of visual information through sound.
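The reported pattern (hues shifting from blue toward yellow as frequency rises, with frequencies above roughly 800 Hz consistently yielding yellow) can be caricatured as a toy mapping. The 100 Hz floor and the linear interpolation are assumptions for illustration only.

```python
def frequency_to_hue(freq_hz):
    """Toy mapping consistent with the reported pattern: low
    frequencies give blue hues, shifting toward yellow as frequency
    rises, and anything above ~800 Hz is pinned at yellow. Hue is in
    degrees (HSL convention: 240 = blue, 60 = yellow)."""
    BLUE, YELLOW = 240.0, 60.0
    LOW, PIVOT = 100.0, 800.0  # assumed audible floor and pivot point
    if freq_hz >= PIVOT:
        return YELLOW
    frac = max(0.0, (freq_hz - LOW) / (PIVOT - LOW))
    return BLUE + (YELLOW - BLUE) * frac
```

Such a rule is the kind of principled frequency-to-hue assignment the authors suggest for sensory substitution of colour, once lightness is controlled separately.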

9.
Article in English | MEDLINE | ID: mdl-28080971

ABSTRACT

Interoception refers to the sensing of signals concerning the internal state of the body. Individual differences in interoceptive sensitivity are proposed to account for differences in affective processing, including the expression of anxiety. The majority of investigations of interoceptive accuracy focus on cardiac signals, typically using heartbeat detection tests and self-report measures. Consequently, little is known about how different organ-specific axes of interoception relate to each other or to symptoms of anxiety. Here, we compare interoception for cardiac and respiratory signals. We demonstrate a dissociation between cardiac and respiratory measures of interoceptive accuracy (i.e. task performance), yet a positive relationship between cardiac and respiratory measures of interoceptive awareness (i.e. metacognitive insight into one's own interoceptive ability). Neither interoceptive accuracy nor metacognitive awareness for cardiac and respiratory measures was related to touch acuity, an exteroceptive sense. Specific measures of interoception were found to be predictive of anxiety symptoms. Poor respiratory accuracy was associated with heightened anxiety scores, while good metacognitive awareness for cardiac interoception was associated with reduced anxiety. These findings highlight that detection accuracies across different sensory modalities are dissociable, and future work can better delineate their relationship to affective and cognitive constructs. This article is part of the themed issue 'Interoception beyond homeostasis: affect, cognition and mental health'.
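Heartbeat detection tests in this literature are commonly scored with a Schandry-style counting formula. The abstract does not state the exact measure used here, so the sketch below is illustrative of the general approach, not this study's method.

```python
def heartbeat_counting_accuracy(recorded, counted):
    """Schandry-style interoceptive accuracy: 1 minus the normalised
    counting error. A score of 1.0 means the participant's count of
    their own heartbeats matched the recorded count exactly; scores
    are typically averaged over several counting intervals."""
    return 1.0 - abs(recorded - counted) / recorded
```

Computing an analogous score for a respiratory task is what lets accuracy across the two organ-specific axes be compared directly.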


Subject(s)
Anxiety/physiopathology , Heart Rate , Interoception , Respiratory Rate , Touch Perception , Adult , Awareness , England , Female , Humans , Male , Middle Aged , Young Adult
10.
Multisens Res ; 29(4-5): 337-63, 2016.
Article in English | MEDLINE | ID: mdl-29384607

ABSTRACT

Visual sensory substitution devices (SSDs) can represent visual characteristics through distinct patterns of sound, allowing a visually impaired user access to visual information. Previous SSDs have avoided colour and, when they do encode colour, have assigned sounds to colour in a largely unprincipled way. This study introduces a new tablet-based SSD termed the 'Creole' (so called because it combines tactile scanning with image sonification) and a new algorithm for converting colour to sound that is based on established cross-modal correspondences (intuitive mappings between different sensory dimensions). To test the utility of correspondences, we examined the colour-sound associative memory and object recognition abilities of sighted users whose device was coded either in line with or opposite to sound-colour correspondences. Users with the correspondence-based mappings showed improved colour memory and made fewer colour errors. Interestingly, the colour-sound mappings that provided the greatest improvements during the associative memory task also produced the greatest gains for recognising realistic objects featuring those colours, indicating a transfer of abilities from memory to recognition. These users were also marginally better at matching sounds to images varying in luminance, even though luminance was coded identically across the different versions of the device. These findings are discussed with relevance to both colour and correspondences for sensory substitution use.
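The congruent-versus-reversed coding manipulation can be illustrated with a hypothetical lightness-to-pitch axis (lighter = higher, a well-documented correspondence). The Creole's actual colour-to-sound algorithm is not specified in the abstract, so this is only a sketch of the experimental contrast.

```python
def lightness_to_pitch(lightness, low_hz=220.0, high_hz=880.0, congruent=True):
    """Map colour lightness (0 = black, 1 = white) onto pitch in Hz.
    The congruent coding follows the lighter-is-higher correspondence;
    congruent=False reverses the axis, mirroring the study's
    opposite-to-correspondence control condition."""
    frac = lightness if congruent else 1.0 - lightness
    return low_hz + (high_hz - low_hz) * frac
```

Comparing user performance under the two settings of `congruent` is the core of the test: the mapping carries identical information either way, so any advantage reflects the intuitiveness of the correspondence itself.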


Subject(s)
Auditory Perception/physiology , Color Perception/physiology , Sensory Aids , Sound , Adolescent , Algorithms , Female , Humans , Male , Sensory Threshold , Young Adult
11.
Multisens Res ; 26(6): 503-32, 2013.
Article in English | MEDLINE | ID: mdl-24800410

ABSTRACT

Visual sensory substitution devices (SSDs) allow visually-deprived individuals to navigate and recognise the 'visual world'; SSDs also provide opportunities for psychologists to study modality-independent theories of perception. At present, most research has focused on encoding greyscale vision. However, at the low spatial resolutions received by SSD users, colour information enhances object-ground segmentation and provides more stable cues for scene and object recognition. Many attempts have been made to encode colour information in tactile or auditory modalities, but many of these studies exist in isolation. This review brings together a wide variety of tactile and auditory approaches to representing colour. We examine how each device constructs 'colour' relative to veridical human colour perception and report previous experiments using these devices. Theoretical approaches to encoding and transferring colour information through sound or touch are discussed for future devices, covering alternative stimulation approaches, perceptually distinct dimensions, and intuitive cross-modal correspondences.


Subject(s)
Auditory Perception/physiology , Color Perception/physiology , Sensory Aids , Touch Perception/physiology , Humans , Sensory Threshold